The gray level residual (GLR) field refers to the intensity differences between corresponding voxel points in digital volume images acquired before and after deformation. Internal damage in materials typically induces substantial variations in grayscale values between corresponding voxel points, so the GLR field helps reveal damage locations. In the finite element-based global digital volume correlation (DVC) method, the GLR field, which serves as the matching quality criterion, can be readily calculated and has been employed to characterize the evolution of internal cracks. However, the widely used subvolume-based local DVC, which outputs displacement, strain, and correlation coefficient only at discrete calculation points, cannot obtain the GLR field directly. Compared with the correlation coefficient and deformation information, the GLR field provides voxelwise matching quality evaluation and thus performs better in visualizing internal damage. Accurate GLR calculation in local DVC is therefore valuable for compensating its shortcomings in fine matching quality evaluation and for expanding its applications in internal damage observation and localization.
The GLR field is obtained by subtracting the reference volume image from the deformed volume image after full-field correction. The key to its calculation is using continuous voxelwise data, including the contrast and brightness correction coefficients and the displacement, to correct the deformed volume image. In this work, a dense interpolation algorithm based on a finite element mesh is adopted to estimate the voxelwise data within the volume of interest (VOI). A 3D Delaunay triangulation algorithm first generates a tetrahedral element mesh from the discrete calculation points, and the data at voxel points inside each tetrahedral element are then determined with the finite element shape functions. After the voxelwise data of the VOI in the reference volume image are acquired, the corrected deformed volume image can be reconstructed. Given that the corresponding voxel points in the deformed volume image normally fall at subvoxel positions, a subvoxel intensity interpolation scheme is required when calculating the correlation residual in local DVC; here, the cubic B-spline interpolation method is adopted to estimate the grayscale of the corrected deformed volume image. In addition, a simulated mode I crack test and a tensile test of nodular cast iron are carried out to verify the feasibility of the GLR field based on local DVC and its reliability and robustness in damage observation and detection.
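The dense-interpolation-and-resampling pipeline described above can be sketched in Python. This is a minimal illustration under stated assumptions, not the authors' implementation: `LinearNDInterpolator` builds the 3D Delaunay mesh and applies linear (tetrahedral shape function) interpolation internally, `map_coordinates` with `order=3` performs the cubic B-spline subvoxel sampling, and the photometric model `a*g + b` as well as all argument names are assumptions.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator
from scipy.ndimage import map_coordinates

def glr_field(ref_vol, def_vol, points, disp, contrast, brightness):
    """GLR sketch: points (N,3) are DVC calculation points in (z,y,x) order;
    disp (N,3) displacements; contrast/brightness (N,) photometric correction
    coefficients at those points (names and photometric model assumed)."""
    # Dense interpolation: 3D Delaunay tetrahedra + linear shape functions
    # (LinearNDInterpolator constructs the tetrahedral mesh internally).
    grids = np.meshgrid(*[np.arange(s) for s in ref_vol.shape], indexing="ij")
    voi = np.stack([g.ravel() for g in grids], axis=-1).astype(float)
    interp = lambda vals: LinearNDInterpolator(points, vals)(voi)
    u = interp(disp)        # voxelwise displacement
    a = interp(contrast)    # voxelwise contrast coefficient
    b = interp(brightness)  # voxelwise brightness coefficient
    # Voxels outside the convex hull of the calculation points are undefined
    # (NaN); zero-fill the displacement there so resampling stays finite.
    u = np.nan_to_num(u)
    # Corresponding points fall at subvoxel positions: sample the deformed
    # volume with cubic B-spline interpolation (order=3).
    g_def = map_coordinates(def_vol, (voi + u).T, order=3, mode="nearest")
    corrected = a * g_def + b   # contrast/brightness correction
    # GLR: corrected deformed volume minus reference volume
    return corrected.reshape(ref_vol.shape) - ref_vol
```

With zero displacement and identity photometric coefficients, the GLR of a volume against itself is numerically zero, which is a useful sanity check before applying the routine to real DVC output.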
In the simulated mode I crack test, the uncorrected GLR field retains higher grayscale values than the corrected GLR field even in regions away from the crack (Fig. 7), which degrades damage observation and localization; contrast and brightness correction is therefore necessary when calculating the GLR field. After threshold processing, the crack plane can be detected clearly from the GLR field, and its position is very close to the preset value (Fig. 7). The proposed local DVC-based GLR field effectively eliminates the influence of contrast and brightness changes and achieves precise crack localization. More information about the damage can also be acquired from the GLR field: in the real test, the crack morphology and orientation can be determined from the slice image at y=40 voxel, and the debonding between the nodular graphite and the matrix can be detected roughly from the GLR field (Fig. 10). It should be noted that the post-processed GLR field reflects only the approximate morphology of the damage and cannot accurately capture the opening of the crack or the debonding, since the interpolation used in the displacement correction may enlarge the damaged region. Nevertheless, the damage locations and morphologies extracted from the GLR field are helpful for understanding the fracture mechanics properties of nodular cast iron.
A simple and practical method for GLR field calculation based on post-processing of local DVC measurements is proposed. The method addresses the limitations of existing local DVC in fine-matching quality evaluation. Compared with correlation coefficient and deformation information, the GLR field not only accurately reflects the location of internal damage but also facilitates visual observation of internal crack morphology and interface debonding behavior. It holds the potential for broader applications in visualizing and precisely locating internal damage within materials and structures.
The vision-based navigation and localization system of China's "Yutu" lunar rover is controlled by a ground teleoperation center. To maximize the driving distance of the rover and improve the efficiency of teleoperated exploration, a large-spacing traveling mode of approximately 6-10 m per site is adopted. This results in significant distances between adjacent navigation sites and considerable rotation, translation, and scale changes in the captured images. The low overlap between images and the vast differences in regional shapes, combined with weak texture and illumination variations on the lunar surface, further challenge image feature matching across sites. Currently, the "Yutu" lunar rover relies on inertial measurements and visual matches between sites for navigation and positioning: the ground teleoperation center takes the inertial measurements as initial poses and refines them with visual matches by bundle adjustment to obtain the final rover poses. However, owing to the wide baseline and significant appearance changes between images at different sites, manual assistance is often required to filter or select correct matches, which significantly reduces the efficiency of the teleoperation center. Improving the robustness of cross-site image feature matching to achieve automatic visual positioning is therefore an urgent problem.
To address the poor performance and low success rate of current image matching algorithms on wide-baseline lunar images with weak textures and illumination variations, we propose a global attention-based lunar image matching algorithm built on view synthesis. First, we use sparse feature matching to generate sparse pseudo-ground-truth disparities for the rectified stereo lunar images at the same site. Next, we fine-tune a stereo matching network with these disparities and perform 3D reconstruction of the lunar images at the same site. Then, we leverage the inertial measurements between sites to warp the original image, based on the scene depth, into a new synthetic view for matching, addressing the low overlap and large viewpoint changes between sites. Additionally, we adopt a Transformer-based image matching network to improve matching performance in weak-texture scenes, and apply an outlier rejection method that accounts for planar degeneracy in the post-processing stage. Finally, the matches are mapped from the synthetic image back to the original image, yielding matches for wide-baseline lunar images at different sites.
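The depth-based view synthesis step can be illustrated with a short sketch. This is a simplified stand-in for the scheme described above, assuming a pinhole camera with intrinsics `K` and an inertial relative pose `(R, t)` from the current site to the next site (all names hypothetical); it forward-warps each pixel through its depth with nearest-pixel splatting, whereas a production pipeline would also handle holes, occlusions, and resampling.

```python
import numpy as np

def synthesize_view(img, depth, K, R, t):
    """Warp a current-site image into the next site's viewpoint using the
    scene depth and the inertial relative pose (hypothetical interface)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])  # homogeneous pixels
    pts = np.linalg.inv(K) @ pix * depth.ravel()            # back-project to 3D
    proj = K @ (R @ pts + t[:, None])                       # into next-site frame
    z = proj[2]
    valid = z > 1e-6                                        # keep points in front of camera
    u2 = np.round(proj[0, valid] / z[valid]).astype(int)
    v2 = np.round(proj[1, valid] / z[valid]).astype(int)
    inside = (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h)
    out = np.zeros_like(img)                                # unfilled pixels stay 0 (holes)
    src = valid.nonzero()[0][inside]
    out[v2[inside], u2[inside]] = img.ravel()[src]          # nearest-pixel splat
    return out
```

Matching is then run between this synthetic image and the real next-site image, and the resulting correspondences are mapped back through the same warp to the original current-site pixels.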
We conduct experiments on a real lunar dataset from the "Yutu 2" lunar rover (referred to as the Moon dataset), which includes two parts: stereo images from five consecutive sites (used for stereo reconstruction) and 12 sets of wide-baseline lunar images from adjacent sites (used to test wide-baseline image matching). For lunar 3D reconstruction, we compute the reconstruction error within different distance ranges; the reconstruction network GwcNet (Moon) yields the best reconstruction accuracy and detail, as shown in Table 1 and Fig. 4. Fig. 5 illustrates the synthetic images produced by the view synthesis scheme from the inter-site inertial measurements and the scene depth, which compensates for the large rotation, translation, and scale changes between adjacent sites. For wide-baseline image matching, existing algorithms such as LoFTR and ASIFT achieve matching success rates of only 33.33% and 16.67%, respectively (Table 2), whereas our DepthWarp-LoFTR algorithm achieves 83.33%, significantly improving both the success rate and the accuracy of wide-baseline lunar image matching (Table 3). Combined with ASIFT, the view synthesis scheme likewise raises the matching success rate from 16.67% to 41.67%. Fig. 7 presents the matching results of the different algorithms; DepthWarp-LoFTR obtains more consistent and denser matches than the other methods.
We propose a robust feature matching method, DepthWarp-LoFTR, for wide-baseline lunar images. For stereo images captured at the same site, sparse disparities are generated with a sparse feature matching algorithm and serve as pseudo-ground truth to train the GwcNet network for 3D reconstruction of same-site lunar images. To handle the wide baseline and low overlap of images from different sites, we propose a view synthesis algorithm based on the scene depth and the inertial prior poses; matching the synthesized current-site image against the next-site image reduces the difficulty of feature matching. In the feature matching stage, we adopt the Transformer-based LoFTR network, which significantly improves the success rate and accuracy of automatic matching. Experimental results on real lunar datasets demonstrate that the proposed algorithm greatly improves the success rate of feature matching in complex wide-baseline lunar scenes, laying a solid foundation for automatic visual positioning of the "Yutu 2" rover and for subsequent routine patrols of lunar rovers in China's fourth lunar exploration phase.
Optical coherence tomography (OCT) is a pivotal biomedical imaging technique based on the principle of low-coherence interference. It produces tomographic scans of biological tissues and is extensively applied in medical fields such as ophthalmology and dermatology. However, the pursuit of higher axial resolution compels OCT systems to use broadband light sources, an approach that inadvertently introduces dispersion and gives rise to imaging artifacts, blurring, and diminished image quality. Dispersion compensation is therefore necessary in OCT systems. Hardware-based compensation techniques increase cost and complexity yet offer limited efficacy, which has spurred the exploration of more flexible dispersion compensation algorithms. However, the commonly employed search-based algorithms suffer from poor adaptability and high computational cost. We therefore introduce a dispersion compensation algorithm based on the spatial pulse broadening caused by dispersion. Integrated into frequency-domain OCT experiments, the algorithm eliminates the need for manual adjustment of the dispersion search range while offering notable computational efficiency, overcoming the shortcomings of conventional search strategies in adaptability and computational cost. The proposed method proves instrumental in enhancing the engineering practicality of OCT systems and improving the quality of tomographic images.
We propose an efficient dispersion compensation algorithm grounded in the spatial pulse broadening caused by dispersion and apply it to frequency-domain OCT experiments. The algorithm consists of two parts: dispersion extraction and dispersion compensation. Based on the principle that dispersion broadens the spatial pulse, the algorithm estimates the dispersion of the signal to be corrected and then applies compensation. A linear equation relates the square of the spatial pulse width to the square of the second-order dispersion. Additional dispersion phases are generated numerically and added to the original spectral signal to yield new dispersed signals; after transformation to the spatial domain, the spatial pulse widths of these signals are measured. Substituting the pulse width values into the equation set gives the second-order dispersion of the original signal. Finally, a dispersion compensation phase is constructed and added to the phase of the original spectral signal for dispersion correction.
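The extraction step above can be sketched numerically. The snippet below is an illustrative reconstruction, not the authors' code: for a Gaussian spectrum the squared spatial pulse width is exactly quadratic in the total second-order dispersion, so adding known trial phases `phi*k**2`, measuring the resulting widths, and fitting a parabola recovers the unknown `phi2` from the vertex. The trial values and the width measure (RMS width of the A-line intensity envelope) are assumptions.

```python
import numpy as np

def pulse_width(spec):
    """RMS width (in samples) of the spatial pulse intensity |IFFT(spec)|^2."""
    p = np.fft.fftshift(np.abs(np.fft.ifft(spec)) ** 2)
    x = np.arange(p.size) - p.size // 2
    mu = (x * p).sum() / p.sum()
    return np.sqrt(((x - mu) ** 2 * p).sum() / p.sum())

def estimate_phi2(spec, k, trials=(-1.0, 0.0, 1.0)):
    """Estimate the second-order dispersion phi2 of the spectral signal `spec`
    sampled at wavenumbers `k` (interface assumed). Adding a trial phase
    phi*k^2 gives width^2 = A + B*(phi2 + phi)^2, a parabola in phi whose
    vertex sits at phi = -phi2."""
    phis = np.asarray(trials, float)
    w2 = np.array([pulse_width(spec * np.exp(1j * p * k**2)) ** 2 for p in phis])
    c2, c1, _ = np.polyfit(phis, w2, 2)   # fit w2 = c2*phi^2 + c1*phi + c0
    return c1 / (2 * c2)                  # vertex at -c1/(2*c2) = -phi2

def compensate(spec, k):
    """Construct the compensation phase and apply it to the spectral signal."""
    return spec * np.exp(-1j * estimate_phi2(spec, k) * k**2)
```

Because the squared width is a parabola in the added phase, three trial phases suffice in this idealized setting; no search interval has to be chosen, which mirrors the adaptability argument made above.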
To validate the efficacy of the algorithm, we build a swept-source OCT (SS-OCT) system for data collection and apply the method to correct dispersion in the system's point spread function (PSF) and in biological tissue images. The experimental results show that the algorithm's dispersion estimates exhibit a relative error of less than 10% with respect to the actual dispersion values under different dispersion conditions (Table 1). After dispersion compensation with this algorithm, the system's peak signal-to-noise ratio and axial resolution are notably improved. At comparable correction quality, the algorithm is about 5 times faster than the commonly employed iterative method and about 50 times faster than the fractional Fourier transform method (Table 2). Image quality is also notably improved: the boundaries in the grape flesh image are sharper, the internal tissue is significantly clearer, and the image energy is more concentrated (Fig. 4); human retinal images display clearer layer differentiation and improved contrast (Fig. 5). These results collectively demonstrate the algorithm's efficacy in enhancing image quality.
We introduce a novel high-efficiency dispersion compensation algorithm based on the spatial pulse width. The algorithm mitigates axial broadening of the PSF and enhances the system's signal-to-noise ratio. Notably, its strength lies in requiring neither prior knowledge of the system dispersion nor manual selection of a dispersion search interval. It accurately estimates the system dispersion and, compared with other search-based algorithms, offers superior computational efficiency at comparable compensation efficacy. Dispersion compensation experiments on grape pulp and human retinal images yield effective results: the algorithm suppresses axial broadening blur, increases image contrast, and clarifies intricate structural features within biological tissues. These outcomes underscore the algorithm's capacity to rectify dispersion in OCT systems and thereby enhance image quality. Nevertheless, certain limitations deserve consideration. First, the algorithm addresses only second-order dispersion; handling higher-order dispersion requires further exploration of the numerical relationship between spatial pulse distortion and the higher-order dispersion terms. Second, the algorithm corrects only system dispersion, ignoring sample dispersion, which depends on the specific sample structure and depth. Future research should explore depth-adaptive sample dispersion compensation, leveraging the algorithm's high computational efficiency to enable depth-dependent correction.